Point-of-Care Ultrasound (POCUS) refers to clinician-performed and interpreted ultrasonography at the patient's bedside. Interpreting these images requires a high level of expertise, which may not be available during emergencies. In this paper, we support POCUS by developing classifiers that can aid medical professionals by diagnosing whether or not a patient has pneumothorax. We decomposed the task into multiple steps, using YOLOv4 to extract relevant regions of the video and a 3D sparse coding model to represent video features. Given the difficulty in acquiring positive training videos, we trained a small-data classifier with a maximum of 15 positive and 32 negative examples. To counteract this limitation, we leveraged subject matter expert (SME) knowledge to limit the hypothesis space, thus reducing the cost of data collection. We present results using two lung ultrasound datasets and demonstrate that our model is capable of achieving performance on par with SMEs in pneumothorax identification. We then developed an iOS application that runs our full system in less than 4 seconds on an iPad Pro, and less than 8 seconds on an iPhone 13 Pro, labeling key regions in the lung sonogram to provide interpretable diagnoses.
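The abstract describes a three-stage pipeline: region extraction, 3D sparse-coding features, and a small-data classifier. Below is a minimal, hypothetical Python sketch of that decomposition; the detector and sparse-coding stages are stubbed with placeholders (the paper uses a trained YOLOv4 model and a learned spatiotemporal dictionary), and all names, shapes, and data are illustrative assumptions, not the authors' code.

```python
# Sketch of the described pipeline under stated assumptions; the detector
# and sparse-coding stages are placeholders for the trained components.
import numpy as np
from sklearn.linear_model import LogisticRegression

def detect_pleural_region(video: np.ndarray) -> np.ndarray:
    """Placeholder for YOLOv4: crop a fixed region of interest.
    video: (frames, H, W) grayscale ultrasound clip."""
    f, h, w = video.shape
    return video[:, h // 4 : 3 * h // 4, w // 4 : 3 * w // 4]

def sparse_code_3d(clip: np.ndarray, dictionary: np.ndarray) -> np.ndarray:
    """Placeholder for 3D sparse coding: project frames onto a (random,
    for illustration) dictionary, apply a nonnegativity threshold, and
    pool over time to get one feature vector per video."""
    flat = clip.reshape(clip.shape[0], -1)        # (frames, pixels)
    codes = np.maximum(flat @ dictionary, 0.0)    # one-step sparse code
    return codes.mean(axis=0)                     # temporal pooling

rng = np.random.default_rng(0)
videos = [rng.random((16, 64, 64)) for _ in range(47)]   # 15 pos + 32 neg
labels = np.array([1] * 15 + [0] * 32)

dictionary = rng.standard_normal((32 * 32, 128))          # pixels x atoms
X = np.stack([sparse_code_3d(detect_pleural_region(v), dictionary)
              for v in videos])
clf = LogisticRegression(max_iter=1000).fit(X, labels)    # small-data classifier
print("train accuracy:", clf.score(X, labels))
```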
Large language models (LLMs) have demonstrated excellent zero-shot generalization to new language tasks. However, effective utilization of LLMs for zero-shot visual question answering (VQA) remains challenging, primarily due to the modality disconnect and task disconnect between LLMs and the VQA task. End-to-end training on vision and language data may bridge these disconnects, but is inflexible and computationally expensive. To address this issue, we propose \emph{Img2Prompt}, a plug-and-play module that provides prompts bridging the aforementioned modality and task disconnects, so that LLMs can perform zero-shot VQA tasks without end-to-end training. To construct such prompts, we employ LLM-agnostic models to produce descriptions of image content and self-constructed question-answer pairs, which effectively guide the LLM to perform zero-shot VQA. Img2Prompt offers the following benefits: 1) It can flexibly work with various LLMs to perform VQA. 2)~Without the need for end-to-end training, it significantly reduces the cost of deploying LLMs for zero-shot VQA tasks. 3) It achieves comparable or better performance than methods relying on end-to-end training. For example, we outperform Flamingo~\cite{Deepmind:Flamingo2022} by 5.6\% on VQAv2. On the challenging A-OKVQA dataset, our method even outperforms few-shot methods by as much as 20\%.
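As a rough illustration of the prompt-bridging idea, the following sketch assembles a caption and self-constructed question-answer pairs into a single text prompt for a frozen LLM. The function name, exemplar format, and sample strings are our assumptions for illustration, not the released Img2Prompt code.

```python
# Minimal sketch of Img2Prompt-style prompt construction; the captioner
# and question generator are assumed to exist and are stubbed with strings.
def build_vqa_prompt(caption: str,
                     synthetic_qa: list[tuple[str, str]],
                     question: str) -> str:
    """Assemble an in-context prompt that bridges the image to the LLM."""
    lines = [f"Context: {caption}"]
    for q, a in synthetic_qa:                 # exemplars built from the image
        lines.append(f"Question: {q} Answer: {a}")
    lines.append(f"Question: {question} Answer:")
    return "\n".join(lines)

caption = "a man riding a wave on a surfboard"        # from a caption model
synthetic_qa = [("what is the man riding?", "surfboard"),
                ("where is the man?", "ocean")]       # self-constructed pairs
prompt = build_vqa_prompt(caption, synthetic_qa, "what sport is this?")
print(prompt)  # feed to any frozen LLM's completion interface
```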
Machine learning models usually assume i.i.d. data during training and testing, but data and tasks in the real world often change over time. To emulate this transient nature of the real world, we propose a challenging but practical task: text classification in-the-wild, which introduces different non-stationary training/testing stages. Decomposing a complex task into modular components can enable robust generalisation in such non-stationary environments. However, current modular approaches in NLP do not take advantage of recent advances in parameter-efficient tuning of pretrained language models. To close this gap, we propose MODULARPROMPT, a label-modular prompt tuning framework for text classification tasks. In MODULARPROMPT, the input prompt consists of a sequence of soft label prompts, each encoding modular knowledge related to the corresponding class label. In the two most formidable settings, MODULARPROMPT outperforms relevant baselines by a large margin, demonstrating strong generalisation ability. We also conduct comprehensive analysis to validate whether the learned prompts satisfy the properties of a modular representation.
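A minimal sketch of what a label-modular soft prompt might look like, under our reading of the abstract: one trainable prompt per class label, with only the prompts of labels active in the current stage prepended to the input. The class name, tensor shapes, and selection rule below are illustrative assumptions, not the paper's implementation.

```python
# Hedged sketch of label-modular prompt assembly; shapes are illustrative.
import torch
import torch.nn as nn

class LabelModularPrompt(nn.Module):
    def __init__(self, num_labels: int, tokens_per_label: int, dim: int):
        super().__init__()
        # one trainable soft prompt per class label
        self.label_prompts = nn.Parameter(
            torch.randn(num_labels, tokens_per_label, dim) * 0.02)

    def forward(self, input_embeds: torch.Tensor,
                active_labels: list[int]) -> torch.Tensor:
        # keep only the prompts for labels present in the current stage,
        # then prepend them to the (frozen) LM's input embeddings
        selected = self.label_prompts[active_labels].flatten(0, 1)
        selected = selected.unsqueeze(0).expand(input_embeds.size(0), -1, -1)
        return torch.cat([selected, input_embeds], dim=1)

prompts = LabelModularPrompt(num_labels=5, tokens_per_label=4, dim=768)
x = torch.randn(2, 16, 768)                    # frozen-LM input embeddings
out = prompts(x, active_labels=[0, 3])         # non-stationary label subset
print(out.shape)                               # torch.Size([2, 24, 768])
```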
AI methods are used in societally important settings, ranging from credit to employment to housing, and it is crucial to ensure fairness in algorithmic decision making. Moreover, many settings are dynamic, with populations responding to sequential decision policies. We introduce the study of reinforcement learning (RL) with stepwise fairness constraints, which require group fairness at each time step. Our focus is on tabular episodic RL, and we provide learning algorithms with strong theoretical guarantees with respect to policy optimality and fairness violation. Our framework provides useful tools for studying the impact of fairness constraints in sequential settings and raises new challenges for RL.
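To make the stepwise constraint concrete, here is an illustrative computation (not the paper's algorithm) of a per-step demographic-parity-style gap for a tabular policy, where each state carries a group attribute and fairness is assessed at a single time step. The fairness notion, group encoding, and all names are assumptions for illustration.

```python
# Illustrative stepwise group-fairness check for a tabular policy.
import numpy as np

def stepwise_fairness_violation(policy: np.ndarray,
                                state_dist: np.ndarray,
                                group_of_state: np.ndarray,
                                action: int) -> float:
    """Gap between groups in the probability of taking `action` at one step.
    policy: (S, A) action probabilities; state_dist: (S,) state occupancy;
    group_of_state: (S,) binary group labels."""
    gaps = []
    for g in (0, 1):
        mask = group_of_state == g
        w = state_dist[mask] / max(state_dist[mask].sum(), 1e-12)
        gaps.append(float(w @ policy[mask, action]))
    return abs(gaps[0] - gaps[1])

rng = np.random.default_rng(0)
policy = rng.dirichlet(np.ones(3), size=6)     # 6 states, 3 actions
state_dist = rng.dirichlet(np.ones(6))         # occupancy at this step
groups = np.array([0, 0, 0, 1, 1, 1])
print(stepwise_fairness_violation(policy, state_dist, groups, action=0))
```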
We introduce LAVIS, an open-source deep learning library for LAnguage-VISion research and applications. LAVIS aims to serve as a one-stop, comprehensive library that makes the latest advances in the language-vision field accessible to researchers and practitioners, and that supports future research and development. It features a unified interface for easy access to state-of-the-art image-language and video-language models and common datasets. LAVIS supports training, evaluation, and benchmarking on a rich variety of tasks, including multimodal classification, retrieval, captioning, visual question answering, dialogue, and pre-training. At the same time, the library is highly extensible and configurable, facilitating future development and customization. In this technical report, we describe the library's design principles, key components, and functionalities, and present benchmarking results across common language-vision tasks. The library is available at: https://github.com/salesforce/lavis.
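For reference, the unified interface looks roughly like the following, based on the usage shown in the project's README; the model name, checkpoint type, and image path below are placeholders and may differ across LAVIS versions.

```python
# Example LAVIS usage: load a captioning model with matching preprocessors.
import torch
from PIL import Image
from lavis.models import load_model_and_preprocess

device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
raw_image = Image.open("merlion.png").convert("RGB")  # any local image path

model, vis_processors, _ = load_model_and_preprocess(
    name="blip_caption", model_type="base_coco", is_eval=True, device=device)

image = vis_processors["eval"](raw_image).unsqueeze(0).to(device)
print(model.generate({"image": image}))  # e.g. ["a statue of a merlion ..."]
```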
According to the Complementary Learning Systems (CLS) theory~\cite{mcclelland1995there} in neuroscience, humans achieve effective \emph{continual learning} through two complementary systems: a fast learning system, centered on the hippocampus, for the rapid learning of specifics and individual experiences; and a slow learning system, located in the neocortex, for the gradual acquisition of structured knowledge about the environment. Motivated by this theory, we propose \emph{DualNets} (for Dual Networks), a general continual learning framework comprising a fast learning system for supervised learning of pattern-separated representations from specific tasks, and a slow learning system for learning task-agnostic, general representations via self-supervised learning (SSL). DualNets can seamlessly incorporate both types of representation into a holistic framework to facilitate better continual learning in deep neural networks. Through extensive experiments, we demonstrate promising results for DualNets across a range of continual learning protocols, from the standard offline, task-aware setting to the challenging online, task-free scenario. Notably, on the CTrL benchmark~\cite{veniat2020}, DualNets achieves performance competitive with state-of-the-art methods~\cite{ostapenko2021continual}. Furthermore, we conduct comprehensive ablation studies to validate the efficacy, robustness, and scalability of DualNets. The code is publicly available at \url{https://github.com/phquang/DualNet}.
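A schematic sketch of the fast/slow split in code follows; the architectures and the self-supervised objective below are deliberately simplified stand-ins (a consistency loss between two noisy views), not the paper's DualNets implementation.

```python
# Schematic fast/slow decomposition; losses and shapes are illustrative.
import torch
import torch.nn as nn

class DualNet(nn.Module):
    def __init__(self, in_dim=128, feat_dim=64, num_classes=10):
        super().__init__()
        # slow learner: task-agnostic representation, trained with SSL
        self.slow = nn.Sequential(nn.Linear(in_dim, feat_dim), nn.ReLU(),
                                  nn.Linear(feat_dim, feat_dim))
        # fast learner: small supervised head adapting the slow features
        self.fast = nn.Linear(feat_dim, num_classes)

    def forward(self, x):
        return self.fast(self.slow(x))

net = DualNet()
x = torch.randn(8, 128)
# slow phase: self-supervised consistency between two views of the input
z1 = net.slow(x + 0.1 * torch.randn_like(x))
z2 = net.slow(x)
ssl_loss = nn.functional.mse_loss(z1, z2)
# fast phase: supervised loss on the current task's labels
sup_loss = nn.functional.cross_entropy(net(x), torch.randint(0, 10, (8,)))
(ssl_loss + sup_loss).backward()
```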
This paper studies the open research problem of generating text-image pairs to improve the training of fine-grained image-to-text cross-modal retrieval, and proposes a novel framework for paired data augmentation that uncovers the hidden semantic information of the StyleGAN2 model. Specifically, we first train a StyleGAN2 model on a given dataset. We then project real images back into the StyleGAN2 latent space to obtain latent codes. To make the generated images manipulable, we further introduce a latent space alignment module to learn the alignment between StyleGAN2 latent codes and the corresponding text caption features. When performing online paired data augmentation, we first generate augmented text via random token replacement, then pass the augmented text into the latent space alignment module to output a latent code, which is finally fed into StyleGAN2 to generate the augmented image. We evaluate the efficacy of our augmented-data approach on two public cross-modal retrieval datasets, where promising experimental results demonstrate that the augmented text-image pairs can be used for training together with the original data to boost image-to-text cross-modal retrieval performance.
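The online augmentation loop described above might look schematically as follows; every component here (text encoder, alignment module, generator) is a stub standing in for the corresponding trained model, and all names and shapes are assumptions for illustration.

```python
# Sketch of the online paired augmentation loop with stubbed components.
import random
import numpy as np

LATENT_DIM = 512

def align_text_to_latent(text_feature: np.ndarray) -> np.ndarray:
    """Placeholder for the latent space alignment module: map a text
    feature to a StyleGAN2 latent code (here, a fixed linear map)."""
    rng = np.random.default_rng(0)
    W = rng.standard_normal((LATENT_DIM, text_feature.size)) * 0.01
    return W @ text_feature

def stylegan2_generate(latent: np.ndarray) -> np.ndarray:
    """Placeholder for the trained StyleGAN2 generator."""
    return np.tanh(latent[:3 * 8 * 8].reshape(3, 8, 8))  # tiny fake image

def augment_pair(caption_tokens: list[str], text_encoder) -> tuple:
    # 1) augment the text by random token replacement
    tokens = caption_tokens[:]
    tokens[random.randrange(len(tokens))] = "<swap>"
    # 2) map the augmented text to a latent code, 3) decode to an image
    latent = align_text_to_latent(text_encoder(tokens))
    return " ".join(tokens), stylegan2_generate(latent)

encoder = lambda toks: np.ones(64) * len(toks)        # stub text encoder
text, image = augment_pair("a red bird on a branch".split(), encoder)
print(text, image.shape)
```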
In this paper, we study the open research task of generating 3D cartoon face shapes from a single 2D GAN trained on human faces, without 3D supervision, where we can also manipulate the facial expressions of the 3D shapes. To this end, we discover the semantic meanings of the StyleGAN latent space, so that we are able to produce face images of various expressions, poses, and lighting by controlling the latent codes. Specifically, we first fine-tune a pretrained StyleGAN face model on a cartoon dataset. By feeding the same latent code into the face and cartoon generation models, we aim to realize the translation from a 2D human face image to a cartoon-styled avatar. We then discover semantic directions of the GAN latent space in an attempt to change facial expressions while preserving the original identity. Since we have no 3D annotations for cartoon faces, we manipulate the latent codes to generate images with different poses and lighting, from which we can reconstruct the 3D cartoon face shapes. We validate the efficacy of our method qualitatively and quantitatively on three cartoon datasets.
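The shared-latent-code translation can be sketched as follows; both generators are stubs standing in for the pretrained face StyleGAN and its cartoon-fine-tuned copy, and the semantic expression direction is assumed to be discovered separately.

```python
# Sketch of shared-latent translation and expression editing (stubs only).
import numpy as np

def face_generator(z: np.ndarray) -> np.ndarray:      # pretrained on faces
    return np.tanh(z[:48].reshape(3, 4, 4))

def cartoon_generator(z: np.ndarray) -> np.ndarray:   # fine-tuned on cartoons
    return np.tanh(1.5 * z[:48].reshape(3, 4, 4))

rng = np.random.default_rng(0)
z = rng.standard_normal(512)
expression_direction = rng.standard_normal(512)        # discovered semantic axis

face = face_generator(z)
avatar = cartoon_generator(z)                          # same code, same identity
smiling_avatar = cartoon_generator(z + 0.8 * expression_direction)
```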
Program synthesis, or code generation, aims to generate a program that satisfies a problem specification. Recent approaches using large-scale pretrained language models (LMs) have shown promising results, yet they have some critical limitations. In particular, they often follow a standard supervised fine-tuning procedure, training a code generation model only from pairs of natural-language problem descriptions and ground-truth programs. This paradigm largely ignores some important but potentially useful signals in the problem specification, such as unit tests, and thus often performs poorly on complex, unseen coding tasks. To address these limitations, we propose CodeRL, a new framework for program synthesis tasks through pretrained LMs and deep reinforcement learning (RL). Specifically, during training we treat the code-generating LM as an actor network and introduce a critic network that is trained to predict the functional correctness of generated programs and to provide dense feedback signals to the actor. During inference, we introduce a new generation procedure with a critical sampling strategy that allows the model to automatically regenerate programs based on feedback from example unit tests and critic scores. For the model backbone, we extend the encoder-decoder architecture of CodeT5 with enhanced learning objectives, larger model sizes, and better pretraining data. Our method not only achieves new state-of-the-art (SOTA) results on the challenging APPS benchmark, but also shows strong zero-shot transfer capability, with new SOTA results on the simpler MBPP benchmark.
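A hedged sketch of the inference-time loop follows: rank candidates with a critic, check the example unit tests, and regenerate on failure. The actor, critic, and test below are stubs; CodeRL's actual critic-guided sampling is more involved (it refines failing programs using critic predictions rather than resampling blindly).

```python
# Illustrative critic-guided generate-test-regenerate loop (all stubs).
import random

def generate_program(prompt: str, temperature: float) -> str:
    # stand-in for the CodeT5-based actor
    body = random.choice(["a + b", "a - b"])
    return f"def solve(a, b):\n    return {body}"

def passes_unit_tests(program: str) -> bool:
    env: dict = {}
    exec(program, env)                       # run candidate in a scratch namespace
    return env["solve"](2, 3) == 5           # example unit test from the spec

def critic_score(program: str) -> float:
    # stand-in for the learned critic predicting functional correctness
    return 1.0 if "+" in program else 0.1

random.seed(0)
candidates = [generate_program("add two numbers", t) for t in (0.2, 0.6, 1.0)]
best = max(candidates, key=critic_score)     # rank candidates by critic score
if not passes_unit_tests(best):              # regenerate on test failure
    best = generate_program("add two numbers", temperature=1.2)
print(best)
```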
Anomaly detection in multivariate time series plays an important role in monitoring the behavior of various real-world systems, such as IT operations or manufacturing. Previous methods model the joint distribution without considering the underlying mechanism of the multivariate time series, which makes them complicated and data-hungry. In this paper, we formulate the anomaly detection problem from a causal perspective and view anomalies as instances that do not follow the regular causal mechanism that generates the multivariate data. We then propose a causality-based anomaly detection approach, which first learns the causal structure from data and then infers whether an instance is an anomaly with respect to the local causal mechanism that generates each variable from its direct causes, whose conditional distribution can be directly estimated from data. In light of the modularity property of causal systems, the original problem is divided into a series of separate, low-dimensional anomaly detection problems, so that where an anomaly occurs can also be directly identified. We evaluate our approach with simulations and public datasets, as well as a case study on real-world AIOps applications, showing its efficacy, robustness, and practical feasibility.
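The modular scoring idea admits a compact illustration under a linear-Gaussian assumption: fit each variable's conditional on its direct causes, then flag points whose residual is extreme for some local mechanism. The causal graph is taken as given here (the paper also learns it from data), and the linear models are a stand-in for whatever conditional estimator one prefers.

```python
# Minimal sketch of per-mechanism anomaly scoring, given a causal graph.
import numpy as np
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(0)
n = 500
x0 = rng.standard_normal(n)
x1 = 2.0 * x0 + 0.1 * rng.standard_normal(n)     # mechanism: x0 -> x1
X = np.column_stack([x0, x1])
parents = {0: [], 1: [0]}                        # causal structure (given)

def fit_mechanisms(X, parents):
    models = {}
    for j, pa in parents.items():
        if pa:
            models[j] = LinearRegression().fit(X[:, pa], X[:, j])
    return models

def anomaly_scores(x, X, parents, models):
    """Per-variable |z|-scores of x against each local causal mechanism."""
    scores = {}
    for j, pa in parents.items():
        if pa:
            resid = X[:, j] - models[j].predict(X[:, pa])
            r = x[j] - models[j].predict(x[pa].reshape(1, -1))[0]
        else:
            resid, r = X[:, j] - X[:, j].mean(), x[j] - X[:, j].mean()
        scores[j] = abs(r) / resid.std()
    return scores

models = fit_mechanisms(X, parents)
point = np.array([0.0, 5.0])                     # x1 violates its mechanism
print(anomaly_scores(point, X, parents, models)) # large score only for x1
```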